Results 1 - 20 of 69
1.
Res Synth Methods ; 15(2): 275-287, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38152969

ABSTRACT

In Bayesian random-effects meta-analysis, the use of weakly informative prior distributions is of particular benefit in cases where only a few studies are included, a situation often encountered in health technology assessment (HTA). Suggestions for empirical prior distributions are available in the literature, but it is unknown whether these are adequate in the context of HTA. Therefore, a database of all relevant meta-analyses conducted by the Institute for Quality and Efficiency in Health Care (IQWiG, Germany) was constructed to derive empirical prior distributions for the heterogeneity parameter suitable for HTA. Previously, an extension to the normal-normal hierarchical model had been suggested for this purpose. For different effect measures, this extended model was applied to the database to conservatively derive a prior distribution for the heterogeneity parameter. A comparison of a Bayesian approach using the derived priors with IQWiG's current standard approach for evidence synthesis shows favorable properties. These prior distributions are therefore recommended for future meta-analyses in HTA settings and could be embedded into the IQWiG evidence synthesis approach in the case of very few studies.
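As an illustration of the kind of analysis described above, the following sketch fits a Bayesian random-effects meta-analysis on the log odds ratio scale with a weakly informative half-normal prior on the heterogeneity parameter tau, using a simple grid approximation. The data values and the prior scale of 0.5 are hypothetical and are not taken from the IQWiG database or the cited article.

import numpy as np
from scipy import stats

# Hypothetical study-level data: log odds ratios and their standard errors
y = np.array([-0.45, -0.10, -0.30])   # observed effects (log OR scale)
s = np.array([0.25, 0.30, 0.20])      # standard errors

# Weakly informative heterogeneity prior: tau ~ half-normal(scale = 0.5), scale illustrative
tau_grid = np.linspace(1e-4, 2.0, 400)
prior_tau = stats.halfnorm.pdf(tau_grid, scale=0.5)

def marginal_loglik(tau):
    """Log marginal likelihood of tau with the overall mean mu integrated out
    under an (improper) uniform prior, as in the normal-normal hierarchical model."""
    w = 1.0 / (s**2 + tau**2)
    mu_hat = np.sum(w * y) / np.sum(w)
    return 0.5 * np.sum(np.log(w)) - 0.5 * np.log(np.sum(w)) - 0.5 * np.sum(w * (y - mu_hat)**2)

loglik = np.array([marginal_loglik(t) for t in tau_grid])
weights = prior_tau * np.exp(loglik - loglik.max())
weights /= weights.sum()                      # discrete posterior weights over the tau grid

# Conditional posterior of mu given tau is normal; mix over the tau grid
w_grid = 1.0 / (s[None, :]**2 + tau_grid[:, None]**2)
mu_hat_grid = np.sum(w_grid * y, axis=1) / np.sum(w_grid, axis=1)
mu_se_grid = np.sqrt(1.0 / np.sum(w_grid, axis=1))

tau_mean = np.sum(weights * tau_grid)
mu_mean = np.sum(weights * mu_hat_grid)
mu_sd = np.sqrt(np.sum(weights * (mu_se_grid**2 + mu_hat_grid**2)) - mu_mean**2)
print(f"Posterior mean of tau: {tau_mean:.3f}")
print(f"Posterior mean of mu: {mu_mean:.3f} (posterior SD {mu_sd:.3f})")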


Subject(s)
Information Dissemination , Technology Assessment, Biomedical , Bayes Theorem , Databases, Factual , Germany
2.
J Clin Epidemiol ; 159: 174-189, 2023 07.
Article in English | MEDLINE | ID: mdl-37263516

ABSTRACT

OBJECTIVES: Previous findings indicate limited reporting of systematic reviews with meta-analyses of time-to-event (TTE) outcomes. We assessed the corresponding information available in trial publications included in such meta-analyses. STUDY DESIGN AND SETTING: We extracted data from all randomized trials in pairwise, hazard ratio (HR)-based meta-analyses of primary outcomes and overall survival of 50 systematic reviews systematically identified from the Cochrane Database and Core Clinical Journals. Data on methods and characteristics relevant for TTE analysis of reviews, trials, and outcomes were extracted. RESULTS: The meta-analyses included 235 trials with 315 trial analyses. The most prominently assessed outcome was overall survival (91%). Definitions (61%), censoring reasons (41%), and follow-up specifications (56%) for trial outcomes were often missing. The TTE data most frequently available per trial were survival curves (83%), log-rank P values (76%), and HRs (72%). When recalculation of trial TTE data was reported, reviews mostly specified HRs or P values (each 5%). Reviews primarily included intention-to-treat analyses (64%) and analyses not adjusted for covariates (25%). Except for missing outcome data, TTE-relevant trial characteristics, for example, informative censoring, treatment switching, and proportional hazards, were sporadically addressed in trial publications. Reporting limitations in trial publications translate to the review level. CONCLUSION: TTE analyses and meta-analyses, in both trial and review publications, need clear reporting standards.


Subject(s)
Systematic Reviews as Topic , Humans , Data Collection
3.
Int J Technol Assess Health Care ; 39(1): e22, 2023 Apr 25.
Article in English | MEDLINE | ID: mdl-37096439

ABSTRACT

BACKGROUND: Systematic reviews (SRs) are usually conducted by a highly specialized group of researchers. The routine involvement of methodological experts is a core methodological recommendation. The present commentary describes the qualifications required for information specialists and statisticians involved in SRs, as well as their tasks, the methodological challenges they face, and potential future areas of involvement. TASKS AND QUALIFICATIONS: Information specialists select the information sources, develop search strategies, conduct the searches, and report the results. Statisticians select the methods for evidence synthesis, assess the risk of bias, and interpret the results. The minimum requirements for their involvement in SRs are a suitable university degree (e.g., in statistics or library/information science, or an equivalent degree), methodological and content expertise, and several years of experience. KEY ARGUMENTS: The complexity of conducting SRs has greatly increased due to a massive rise in the amount of available evidence and in the number and complexity of SR methods, largely statistical and information retrieval methods. Additional challenges exist in the actual conduct of an SR, such as judging how complex the research question could become and what hurdles could arise during the course of the project. CONCLUSION: SRs are becoming increasingly complex to conduct, and information specialists and statisticians should routinely be involved right from the start of the SR. This increases the trustworthiness of SRs as the basis for reliable, unbiased, and reproducible health policy and clinical decision making.


Subject(s)
Information Storage and Retrieval , Research Design , Humans , Systematic Reviews as Topic , Information Sources , Information Services
4.
Stat Med ; 42(14): 2439-2454, 2023 06 30.
Article in English | MEDLINE | ID: mdl-37005007

ABSTRACT

In Bayesian meta-analysis, the specification of a prior distribution for the between-study heterogeneity is commonly required and is of particular benefit in situations where only a few studies are included. One consideration in setting up such prior distributions is the consultation of available empirical data on a set of relevant past analyses. How exactly to summarize historical data sensibly is not immediately obvious; in particular, investigating an empirical collection of heterogeneity estimates does not target the actual problem and will usually be of limited use. The commonly used normal-normal hierarchical model for random-effects meta-analysis is extended to infer a heterogeneity prior. Using an example data set, we demonstrate how to fit a distribution to empirically observed heterogeneity data from a set of meta-analyses. Considerations also include the choice of a parametric distribution family. Here, we focus on simple and readily applicable approaches for translating these data into (prior) probability distributions.


Subject(s)
Referral and Consultation , Humans , Bayes Theorem , Data Interpretation, Statistical
5.
BMC Med Res Methodol ; 22(1): 319, 2022 12 13.
Article in English | MEDLINE | ID: mdl-36514000

ABSTRACT

BACKGROUND: Meta-analyses are used to summarise the results of several studies on a specific research question. Standard methods for meta-analyses, namely inverse variance random effects models, have unfavourable properties if only very few (2-4) studies are available. Therefore, alternative meta-analytic methods are needed. In the case of binary data, the "common-rho" beta-binomial model has shown good results in situations with sparse data or few studies. The major concern with this model is that it ignores the fact that each treatment arm is paired with a control arm from the same study. Thus, the randomisation to a study arm of a specific study is disrespected, which may lead to compromised estimates of the treatment effect. Therefore, we extended this model to a version that respects randomisation. The aim of this simulation study was to compare the "common-rho" beta-binomial model and several other beta-binomial models with standard meta-analysis models, including generalised linear mixed models and several inverse variance random effects models. METHODS: We conducted a simulation study comparing beta-binomial models and various standard meta-analysis methods. The design of the simulation aimed to reflect meta-analytic situations occurring in practice. RESULTS: In the random-effects scenarios with only 2 studies, no method performed well. In this situation, a fixed effect model or a qualitative summary of the study results may be preferable. In scenarios with 3 or 4 studies, most methods satisfied the nominal coverage probability. The "common-rho" beta-binomial model showed the highest power under the alternative hypothesis. The beta-binomial model respecting randomisation did not improve performance. CONCLUSION: The "common-rho" beta-binomial model appears to be a good option for meta-analyses of very few studies. As residual concerns about the consequences of disrespecting randomisation may still exist, we recommend a sensitivity analysis with a standard meta-analysis method that respects randomisation.


Subject(s)
Models, Statistical , Humans , Probability , Linear Models , Computer Simulation
6.
Methods Mol Biol ; 2345: 91-102, 2022.
Article in English | MEDLINE | ID: mdl-34550585

ABSTRACT

This chapter presents a methodological framework for choosing a model for the meta-analysis of very few studies and for selecting an estimation method for the chosen model, based on study characteristics and on a comparison of the results yielded by different approaches. When the results of different estimation methods are inconclusive, the best solution may be to refrain from a quantitative meta-analysis and instead to summarize the study results by means of a qualitative evidence synthesis.


Subject(s)
Meta-Analysis as Topic , Humans
7.
Res Synth Methods ; 12(4): 448-474, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33486828

ABSTRACT

The normal-normal hierarchical model (NNHM) constitutes a simple and widely used framework for meta-analysis. In the common case of only a few studies contributing to the meta-analysis, standard approaches to inference tend to perform poorly, and Bayesian meta-analysis has been suggested as a potential solution. The Bayesian approach, however, requires the sensible specification of prior distributions. While noninformative priors are commonly used for the overall mean effect, the use of weakly informative priors has been suggested for the heterogeneity parameter, in particular in the setting of (very) few studies. To date, however, a consensus on how to specify a weakly informative heterogeneity prior in general is lacking. Here we investigate the problem more closely and provide some guidance on prior specification.
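For orientation, the NNHM referred to above can be written as follows; the half-normal heterogeneity prior shown is one commonly suggested weakly informative choice, and the scale value given is purely illustrative rather than a recommendation taken from the article.

\begin{align*}
  y_i \mid \theta_i &\sim \mathrm{N}(\theta_i,\, s_i^2), \qquad i = 1, \dots, k,\\
  \theta_i \mid \mu, \tau &\sim \mathrm{N}(\mu,\, \tau^2),\\
  \mu &\sim \text{noninformative (e.g., improper uniform)},\\
  \tau &\sim \mathrm{HN}(s_\tau), \quad \text{e.g., } s_\tau = 0.5 \text{ (weakly informative half-normal; scale illustrative)},
\end{align*}

where $y_i$ is the observed effect in study $i$, $s_i$ its standard error, $\theta_i$ the study-specific true effect, $\mu$ the overall mean effect, and $\tau$ the between-study heterogeneity.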


Subject(s)
Bayes Theorem
8.
J Clin Epidemiol ; 129: 126-137, 2021 01.
Article in English | MEDLINE | ID: mdl-33007458

ABSTRACT

OBJECTIVES: To provide Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) guidance for the consideration of study limitations (risk of bias) due to missing participant outcome data for time-to-event outcomes in intervention studies. STUDY DESIGN AND SETTING: We developed this guidance through an iterative process that included membership consultation, feedback, presentation, and discussion at meetings of the GRADE working group. RESULTS: The GRADE working group has published guidance on how to account for missing participant outcome data in binary and continuous outcomes. When analyzing time-to-event outcomes (e.g., overall survival and time to treatment failure), data of participants for whom the outcome of interest (e.g., death or relapse) has not been observed are dealt with through censoring. To do so, standard methods require that censored individuals are representative of those remaining in the study. Two types of censoring can be distinguished: end-of-study censoring and censoring because of missing data, commonly termed loss-to-follow-up censoring. However, the two types are not distinguishable with the usual information on censoring available to review authors. Dealing with individuals for whom data are missing during follow-up in the same way as individuals with full follow-up at the end of the study increases the risk of bias. Considerable differences between the treatment arms in the distribution of censoring over time (early versus late censoring), the overall degree of missing follow-up data, and the reasons why individuals were lost to follow-up may reduce the certainty in the study results. With often only very limited data available, review and guideline authors must make transparent and well-considered judgments when assessing the risk of bias of individual studies and then reach an overall grading decision for the entire body of evidence. CONCLUSION: Concern about risk of bias resulting from censoring of participants for whom follow-up data are missing in the underlying studies of a body of evidence can be expressed in the study limitations (risk of bias) domain of the GRADE approach.


Subject(s)
Clinical Studies as Topic , GRADE Approach , Bias , Clinical Studies as Topic/methods , Clinical Studies as Topic/standards , Humans , Lost to Follow-Up , Outcome Assessment, Health Care/methods , Outcome Assessment, Health Care/organization & administration , Patient Dropouts , Research Design/standards , Risk Assessment
10.
BMC Med Res Methodol ; 20(1): 36, 2020 02 24.
Article in English | MEDLINE | ID: mdl-32093605

ABSTRACT

BACKGROUND: Network meta-analysis (NMA) is becoming increasingly popular in systematic reviews and health technology assessments. However, there is still ambiguity concerning the properties of the estimation approaches as well as of the methods used to evaluate the consistency assumption. METHODS: We conducted a simulation study for networks with up to 5 interventions. We investigated the properties of different methods in order to give recommendations for practical application. We evaluated the performance of 3 different models for complex networks as well as corresponding global methods for assessing the consistency assumption. The models are the frequentist graph-theoretical approach netmeta, the Bayesian mixed treatment comparisons (MTC) consistency model, and the MTC consistency model with stepwise removal of studies contributing to inconsistency identified in a leverage plot. RESULTS: We found that with a high degree of inconsistency none of the evaluated effect estimators produced reliable results, whereas with moderate or no inconsistency the estimator from the MTC consistency model and the netmeta estimator showed acceptable properties. Performance also depended on the amount of heterogeneity. None of the evaluated methods for assessing the consistency assumption proved suitable. CONCLUSIONS: Based on our results we recommend a pragmatic approach for practical application in NMA. The estimator from the netmeta approach or the estimator from the Bayesian MTC consistency model should be preferred. Since none of the methods for assessing the consistency assumption showed satisfactory results, users should place a strong focus on the similarity and homogeneity assumptions.


Subject(s)
Algorithms , Computer Simulation , Models, Theoretical , Network Meta-Analysis , Technology Assessment, Biomedical/methods , Antidepressive Agents/therapeutic use , Depression/drug therapy , Humans , Outcome Assessment, Health Care/methods , Reproducibility of Results
11.
J Clin Epidemiol ; 118: 124-131, 2020 02.
Article in English | MEDLINE | ID: mdl-31711910

ABSTRACT

OBJECTIVES: To provide GRADE guidance on how to prepare Summary of Findings tables and Evidence Profiles for time-to-event outcomes, with a focus on the calculation of the corresponding absolute effect estimates. STUDY DESIGN AND SETTING: This guidance was motivated by a research project identifying frequent errors and limitations in the presentation of time-to-event outcomes in Summary of Findings tables. We developed this guidance through an iterative process that included membership consultation, feedback, presentation, and discussion at meetings of the GRADE Working Group. RESULTS: Review authors need to consider the definition of the outcome of interest carefully; although the event is often used as the label for the outcome of interest (e.g., death or mortality), it is the event-free survival (e.g., overall survival) that is reported in individual studies. Review authors should calculate the absolute effect correctly, either for the event or for the absence of the event. We also provide examples of how to calculate the absolute effects for events and for the absence of events for various baseline or control group risks and time points. CONCLUSIONS: This article aids in the development of Summary of Findings tables and Evidence Profiles that include time-to-event outcomes and addresses the most common scenarios when calculating absolute effects, in order to provide an accurate interpretation.
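As a worked illustration of the absolute-effect calculation discussed above (numbers hypothetical, not taken from the GRADE guidance), the intervention-group risk at a given time point can be obtained from the control-group risk and the hazard ratio under the proportional-hazards assumption:

# Hypothetical inputs: 24-month control-group risk of death and a hazard ratio
control_risk = 0.30   # 30% of control patients have died by 24 months
hr = 0.70             # hazard ratio for death, intervention vs. control

# Under proportional hazards: S_int(t) = S_ctrl(t) ** HR, so risk = 1 - survival
intervention_risk = 1 - (1 - control_risk) ** hr
risk_difference = control_risk - intervention_risk

print(f"Intervention group risk: {intervention_risk:.3f}")   # ~0.221
print(f"Absolute risk reduction: {risk_difference:.3f}")     # ~0.079, i.e. ~79 fewer per 1000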


Subject(s)
Endpoint Determination/standards , Research Report/standards , Data Collection/standards , Data Interpretation, Statistical , Evidence-Based Medicine , Guidelines as Topic , Humans , Systematic Reviews as Topic
12.
Trials ; 20(1): 485, 2019 Aug 08.
Article in English | MEDLINE | ID: mdl-31395087

ABSTRACT

BACKGROUND: Incidence density ratios (IDRs) are frequently used to account for varying follow-up times when comparing the risks of adverse events in two treatment groups. The validity of the IDR as an approximation of the hazard ratio (HR) is unknown in the situation of differential average follow-up by treatment group and non-constant hazard functions. Thus, the use of the IDR when individual patient data are not available might be questionable. METHODS: A simulation study was performed using various survival-time distributions with increasing and decreasing hazard functions and various situations of differential follow-up by treatment group. HRs and IDRs were estimated from the simulated survival times and compared with the true HR. A rule of thumb was derived to decide in which data situations the IDR can be used as an approximation of the HR. RESULTS: The results show that the validity of the IDR depends on the survival-time distribution, the difference between the average follow-up durations, the baseline risk, and the sample size. For non-constant hazard functions, the IDR is an adequate approximation of the HR only if the average follow-up durations of the groups are equal and the baseline risk is not larger than 25%. In the case of large differences in the average follow-up durations between the groups and non-constant hazard functions, the IDR is not a valid approximation of the HR. CONCLUSIONS: The proposed rule of thumb allows the use of the IDR as an approximation of the HR in specific data situations, when it is not possible to estimate the HR by means of adequate survival-time methods because the required individual patient data are not available. However, in general, adequate survival-time methods should be used to analyze adverse events rather than the simple IDR.
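To make the quantities discussed above concrete, the following minimal sketch computes an incidence density ratio from aggregate data with an approximate confidence interval; all numbers are hypothetical.

import math

# Hypothetical aggregate safety data reported per treatment arm
events_treat, persontime_treat = 36, 480.0    # events and person-years, treatment arm
events_ctrl, persontime_ctrl = 52, 510.0      # events and person-years, control arm

rate_treat = events_treat / persontime_treat
rate_ctrl = events_ctrl / persontime_ctrl
idr = rate_treat / rate_ctrl

# Large-sample 95% CI for the IDR on the log scale
se_log_idr = math.sqrt(1 / events_treat + 1 / events_ctrl)
ci_low = math.exp(math.log(idr) - 1.96 * se_log_idr)
ci_high = math.exp(math.log(idr) + 1.96 * se_log_idr)

print(f"IDR = {idr:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
# The IDR is a reasonable stand-in for the HR only if hazards are roughly constant
# over time and the average follow-up durations of the groups are similar.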


Subject(s)
Incidence , Proportional Hazards Models , Computer Simulation , Humans , Probability
13.
J Clin Epidemiol ; 111: 11-22, 2019 07.
Article in English | MEDLINE | ID: mdl-30905696

ABSTRACT

OBJECTIVE: The objective of this study was to present ways to graphically represent the number needed to treat (NNT) in (network) meta-analysis (NMA). STUDY DESIGN AND SETTING: When an odds ratio (OR) or risk ratio (RR) is used, a barrier to using the NNT in NMA is the determination of a single control event rate (CER). We discuss approaches to calculating a CER and illustrate six graphical methods for presenting NNTs from NMA. We illustrate the graphical approaches using an NMA of cognitive enhancers for Alzheimer's dementia. RESULTS: The NNT calculation using a relative effect measure, such as the OR or RR, requires a CER value, but different CERs, including the mean CER across studies, the pooled CER in meta-analysis, and an expert opinion-based CER, may result in different NNTs. An NNT from NMA can be presented in a bar plot, Cates plot, or forest plot for a single outcome, and in a bubble plot, scatterplot, or rank-heat plot for ≥2 outcomes. Each plot is associated with different properties and can serve different needs. CONCLUSION: Caution is needed in NNT interpretation, as considerations such as the selection of effect size and CER, and the CER assumption across multiple comparisons, may impact the NNT and decision-making. The proposed graphs are helpful for interpreting NNTs calculated from (network) meta-analyses.
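The following sketch illustrates how an NNT can be derived from a relative effect estimate and an assumed control event rate, as discussed above. The effect sizes and CERs are hypothetical, and the conversions shown are the standard textbook formulas rather than a procedure specific to the cited study.

def nnt_from_rr(rr: float, cer: float) -> float:
    """NNT given a risk ratio and a control event rate (for an undesirable event)."""
    eer = rr * cer                              # experimental event rate implied by the RR
    return 1.0 / abs(cer - eer)

def nnt_from_or(or_: float, cer: float) -> float:
    """NNT given an odds ratio and a control event rate."""
    eer = (or_ * cer) / (1 - cer + or_ * cer)   # convert OR plus CER to an experimental risk
    return 1.0 / abs(cer - eer)

# Hypothetical example: the same nominal effect under different CER assumptions
for cer in (0.10, 0.20, 0.40):
    print(f"CER={cer:.2f}: NNT from RR=0.80 -> {nnt_from_rr(0.80, cer):.1f}, "
          f"NNT from OR=0.80 -> {nnt_from_or(0.80, cer):.1f}")
# Different CER choices (mean CER, pooled CER, expert opinion) yield different NNTs,
# which is why the choice needs to be justified and reported.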


Subject(s)
Computer Graphics , Network Meta-Analysis , Numbers Needed To Treat/statistics & numerical data
14.
Res Synth Methods ; 10(1): 23-43, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30129707

ABSTRACT

Meta-analyses are an important tool within systematic reviews to estimate the overall effect size and its confidence interval for an outcome of interest. If heterogeneity between the results of the relevant studies is anticipated, then a random-effects model is often preferred for analysis. In this model, a prediction interval for the true effect in a new study provides additional useful information. However, the DerSimonian and Laird method, frequently used as the default method for meta-analyses with random effects, has long been challenged due to its unfavorable statistical properties. Several alternative methods have been proposed that may have better statistical properties in specific scenarios. In this paper, we aim to provide a comprehensive overview of available methods for calculating point estimates, confidence intervals, and prediction intervals for the overall effect size under the random-effects model. We indicate whether some methods are preferable to others by considering the results of comparative simulation and real-life data studies.
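For orientation, a minimal sketch of the classical DerSimonian and Laird computation together with a Higgins-Thompson-Spiegelhalter-type prediction interval, two of the quantities covered by the overview above; the input effect estimates are hypothetical, and the many alternative estimators compared in the overview are omitted.

import numpy as np
from scipy import stats

# Hypothetical study effects (e.g., log odds ratios) and standard errors
y = np.array([0.10, 0.35, -0.05, 0.42, 0.20])
s = np.array([0.15, 0.20, 0.25, 0.18, 0.22])
k = len(y)

# DerSimonian-Laird estimate of the between-study variance tau^2
w_fixed = 1 / s**2
mu_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
Q = np.sum(w_fixed * (y - mu_fixed)**2)
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects pooled estimate with a Wald-type confidence interval
w_re = 1 / (s**2 + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
ci = mu_re + np.array([-1, 1]) * stats.norm.ppf(0.975) * se_re

# Prediction interval for the true effect in a new study (t with k-2 df)
t_crit = stats.t.ppf(0.975, df=k - 2)
pi = mu_re + np.array([-1, 1]) * t_crit * np.sqrt(tau2 + se_re**2)

print(f"tau^2 = {tau2:.4f}, pooled effect = {mu_re:.3f}")
print(f"95% CI: {ci.round(3)}, 95% prediction interval: {pi.round(3)}")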


Subject(s)
Meta-Analysis as Topic , Research Design , Algorithms , Bayes Theorem , Computer Simulation , Data Interpretation, Statistical , Decision Making , Humans , Likelihood Functions , Models, Statistical , Random Allocation , Reproducibility of Results , Sample Size , Uncertainty
16.
Res Synth Methods ; 9(3): 382-392, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29504289

ABSTRACT

In systematic reviews, meta-analyses are routinely applied to summarize the results of the relevant studies for a specific research question. If one can assume that the same true effect is estimated in all studies, the application of a meta-analysis with common effect (commonly referred to as fixed-effect meta-analysis) is adequate. If between-study heterogeneity is expected to be present, the method of choice is a meta-analysis with random effects. The widely used DerSimonian and Laird method for meta-analyses with random effects has been criticized due to its unfavorable statistical properties, especially in the case of very few studies. A working group of the Cochrane Collaboration recommended the use of the Knapp-Hartung method for meta-analyses with random effects. However, as heterogeneity cannot be reliably estimated if only very few studies are available, the Knapp-Hartung method, while correctly accounting for the corresponding uncertainty, has very low power. Our aim is to summarize possible methods to perform meaningful evidence syntheses in the situation with only very few (i.e., 2-4) studies. Some general recommendations are provided on which method should be used in which situation. Our recommendations are based on the existing literature on methods for meta-analysis with very few studies and on the consensus of the authors. The recommendations are illustrated by 2 examples from dossier assessments of the Institute for Quality and Efficiency in Health Care.
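As a small sketch of the Knapp-Hartung adjustment mentioned above, the following function computes the adjusted confidence interval given study effects, standard errors, and a between-study variance estimate from any of the usual estimators; the data are hypothetical.

import numpy as np
from scipy import stats

def knapp_hartung_ci(y, s, tau2, level=0.95):
    """Knapp-Hartung confidence interval for the pooled effect, given tau^2."""
    y, s = np.asarray(y, dtype=float), np.asarray(s, dtype=float)
    k = len(y)
    w = 1 / (s**2 + tau2)
    mu = np.sum(w * y) / np.sum(w)
    # Knapp-Hartung variance estimator and t-based interval with k-1 degrees of freedom
    q = np.sum(w * (y - mu)**2) / ((k - 1) * np.sum(w))
    half_width = stats.t.ppf(1 - (1 - level) / 2, df=k - 1) * np.sqrt(q)
    return mu, (mu - half_width, mu + half_width)

# Hypothetical example with only 3 studies
mu, ci = knapp_hartung_ci(y=[0.25, 0.05, 0.40], s=[0.20, 0.25, 0.30], tau2=0.02)
print(f"Pooled effect {mu:.3f}, 95% KH CI {ci[0]:.3f} to {ci[1]:.3f}")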


Subject(s)
Computer Simulation , Data Interpretation, Statistical , Meta-Analysis as Topic , Uncertainty , Abatacept/pharmacology , Algorithms , Bayes Theorem , Cyclosporine/pharmacology , Female , Humans , Immunosuppressive Agents/pharmacology , Kidney Transplantation , Male , Models, Statistical , Neoplasm Metastasis , Prostatic Neoplasms, Castration-Resistant/drug therapy , Quality of Health Care , Renal Insufficiency/surgery , Statistics as Topic
17.
Biom J ; 58(6): 1428-1444, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27546483

ABSTRACT

For the calculation of relative effect measures such as the risk ratio (RR) and odds ratio (OR) in a single study, additional approaches are required in the case of zero events. In the case of zero events in one treatment arm, the Peto odds ratio (POR) can be calculated without continuity correction and is currently the relative effect estimation method of choice for binary data with rare events. The aim of this simulation study is a multifaceted comparison of the estimated OR and the estimated POR with the true OR in a single study with two parallel groups and no confounders, in data situations where the POR is currently recommended. This comparison was performed by means of several performance measures, namely coverage, confidence interval (CI) width, mean squared error (MSE), and mean percentage error (MPE). We demonstrated that the estimator for the POR does not outperform the estimator for the OR across all the performance measures investigated. In the case of rare events, small treatment effects, and similar group sizes, the estimator for the POR performed better than the estimator for the OR only with regard to coverage and MPE, but not CI width and MSE. For larger effects and unbalanced group size ratios, the coverage and MPE of the estimator for the POR were inappropriate. As the true effect is unknown in practice, the POR method should be applied only with the utmost caution.
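For context, a minimal sketch of the Peto one-step estimator for a single 2 × 2 table with a zero cell; the counts are hypothetical, and the formula is the standard one rather than anything specific to the cited simulation study.

import math

# Hypothetical 2 x 2 table with rare events: (events, non-events) per arm
a, b = 0, 120    # treatment arm: 0 events out of 120
c, d = 4, 116    # control arm: 4 events out of 120
n1, n2 = a + b, c + d          # arm sizes
m1, m2 = a + c, b + d          # total events / total non-events
N = n1 + n2

# Peto one-step estimator: no continuity correction needed for zero cells
O = a                                        # observed events in the treatment arm
E = n1 * m1 / N                              # expected events under the null
V = (n1 * n2 * m1 * m2) / (N**2 * (N - 1))   # hypergeometric variance

log_por = (O - E) / V
se_log_por = 1 / math.sqrt(V)
por = math.exp(log_por)
ci = (math.exp(log_por - 1.96 * se_log_por), math.exp(log_por + 1.96 * se_log_por))
print(f"Peto OR = {por:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")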


Subject(s)
Biometry/methods , Models, Statistical , Computer Simulation , Humans , Odds Ratio , Risk
19.
Pharm Stat ; 15(4): 292-6, 2016 Jul.
Article in English | MEDLINE | ID: mdl-26928768

ABSTRACT

The analysis of adverse events plays an important role in the benefit assessment of drugs. Consequently, results on adverse events are an integral part of reimbursement dossiers submitted by pharmaceutical companies to health policy decision-makers. The analysis of adverse events commonly relies on simple standard methods for contingency tables. However, the results produced may be misleading if observations are censored at the time of discontinuation due to treatment switching or noncompliance, resulting in unequal follow-up periods. In this paper, we present examples to show that the application of inadequate methods for the analysis of adverse events in the reimbursement dossier can lead to a downgrading of the evidence on a drug's benefit in the subsequent assessment, as greater harm from the drug cannot be excluded with sufficient certainty. Legal regulations on the benefit assessment of drugs in Germany are presented, in particular with regard to the analysis of adverse events. Differences in safety considerations between the drug approval process and the benefit assessment are discussed. We show that the naive application of simple proportions in reimbursement dossiers frequently leads to uninterpretable results if observations are censored and the average follow-up periods differ between treatment groups. Likewise, the application of incidence rates may be misleading in the case of recurrent events and unequal follow-up periods. To allow for an appropriate benefit assessment of drugs, adequate survival-time methods accounting for time dependencies and duration of follow-up are required, not only for time-to-event efficacy endpoints but also for adverse events.


Subject(s)
Antineoplastic Agents/adverse effects , Drug Approval/statistics & numerical data , Drug Industry/statistics & numerical data , Drug-Related Side Effects and Adverse Reactions/mortality , Adverse Drug Reaction Reporting Systems/statistics & numerical data , Androstenes/adverse effects , Clinical Trials as Topic/methods , Clinical Trials as Topic/statistics & numerical data , Drug Approval/methods , Drug Industry/methods , Drug-Related Side Effects and Adverse Reactions/diagnosis , Female , Follow-Up Studies , Germany/epidemiology , Humans , Male , Piperidines/adverse effects , Prostatic Neoplasms/drug therapy , Prostatic Neoplasms/mortality , Quinazolines/adverse effects , Survival Rate/trends , Thyroid Neoplasms/drug therapy , Thyroid Neoplasms/mortality
20.
Biom J ; 58(1): 43-58, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26134089

ABSTRACT

At the beginning of 2011, the early benefit assessment of new drugs was introduced in Germany with the Act on the Reform of the Market for Medicinal Products (AMNOG). The Federal Joint Committee (G-BA) generally commissions the Institute for Quality and Efficiency in Health Care (IQWiG) with this type of assessment, which examines whether a new drug shows an added benefit (a positive patient-relevant treatment effect) over the current standard therapy. IQWiG is required to assess the extent of added benefit on the basis of a dossier submitted by the pharmaceutical company responsible. In this context, IQWiG was faced with the task of developing a transparent and plausible approach for operationalizing how to determine the extent of added benefit. In the case of an added benefit, the law specifies three main extent categories (minor, considerable, major). To restrict value judgements to a minimum in the first stage of the assessment process, an explicit and abstract operationalization was needed. The present paper is limited to the situation of binary data (analysis of 2 × 2 tables), using the relative risk as the effect measure. For the treatment effect to be classified as a minor, considerable, or major added benefit, the methodological approach stipulates that the (two-sided) 95% confidence interval of the effect must exceed a specified distance from the null (no-effect) value. In summary, we assume that our approach provides a robust, transparent, and thus predictable foundation for determining minor, considerable, and major treatment effects on binary outcomes in the early benefit assessment of new drugs in Germany. After a decision on the added benefit of a new drug by the G-BA, the classification of added benefit is used to inform pricing negotiations between the umbrella organization of the statutory health insurance funds and the pharmaceutical companies.
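To illustrate the type of decision rule described above, the following hedged sketch checks the upper limit of the 95% confidence interval of a relative risk (for a beneficial effect, RR < 1) against shifted thresholds. The threshold values and the function name are illustrative placeholders only; they are not the outcome-specific values defined in the IQWiG methods.

import math

def extent_of_added_benefit(rr: float, se_log_rr: float,
                            thresholds=(0.95, 0.85, 0.75)) -> str:
    """Classify the extent of added benefit for a beneficial effect (RR < 1).

    The upper 95% CI limit of the RR must fall below a threshold shifted away
    from the null effect (RR = 1). The thresholds here are illustrative
    placeholders, not the outcome-specific values from the IQWiG methods paper.
    """
    upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
    t_minor, t_considerable, t_major = thresholds
    if upper < t_major:
        return "major added benefit"
    if upper < t_considerable:
        return "considerable added benefit"
    if upper < t_minor:
        return "minor added benefit"
    return "added benefit not shown (on this criterion)"

# Hypothetical example: RR = 0.70 with standard error 0.08 on the log scale
print(extent_of_added_benefit(0.70, 0.08))   # -> "considerable added benefit"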


Subject(s)
Biometry/methods , Drug Approval , Drug Therapy , Drug Industry/legislation & jurisprudence , Government Regulation , Humans , Monte Carlo Method , Risk Assessment